#neural decoding
ailifehacks · 20 days ago
Text
AI Thought Detection: How Artificial Intelligence Could Read Minds Without Words
AI thought detection is evolving fast—discover how artificial intelligence may soon decode human thoughts without verbal communication. The idea of machines understanding our minds has fascinated scientists and futurists for decades. Now, AI thought detection is turning that fantasy into possible reality. Using brain-computer interfaces (BCIs) and neural decoding technology, artificial…
0 notes
sciencestyled · 8 months ago
Text
Beethoven's Brainstorm: How I, the Great Ludwig, Decoded Minds (Without a Single Note)
Ah, yes! Sit down, sit down! Though you might expect tales of glorious symphonies or tempestuous overtures, today, my dear reader, we take a sudden left turn—like one of my more questionable harmonies. How did I, the inimitable Ludwig van Beethoven, come to this point? This moment where I stand—nay, sit—before you, championing not the glories of music, but the maddening marvels of neural decoding? Oh, I know what you're thinking: "But Ludwig, you're a composer, not a brain scientist!" And to that, I say, exactly.
The story begins, as all absurdities do, with a piano bench and a busted ear trumpet. Picture it: I was in the throes of yet another attempt to “hear” my compositions, when out of nowhere, I swear my cat, Fidelio, made some sort of unholy alliance with gravity and knocked my only functioning ear trumpet off the piano. Clang! Gone. Silence, glorious and infuriating silence. There I sat, contemplating the mysteries of existence and, perhaps more pressingly, how to decipher a cat's villainous intent.
But something peculiar struck me just then. It wasn’t just that I couldn’t hear my cat’s devilish deeds—I couldn’t “read” him either. What thoughts ran through his tiny feline brain? Was he scheming? Was he plotting my undoing? (Likely.) And it wasn't just him—humans too! I’ve spent years watching people stare at me as I conducted, and yet I’ve always wondered: What in the devil’s name are they thinking?
Curiosity overtook me like an unexpected F-sharp in the middle of a C-major chord. Surely, I thought, there must be a way to decode the minds of others!
It was then that fate intervened, or rather, my neighbor. Enter Herr Schmidt, a man of unbearable mediocrity who had, until that point, contributed nothing to the world but unsolicited opinions on my nocturnes. Schmidt burst into my study one evening, raving not about music (for once), but about something called neural decoding. I, of course, had no clue what that meant. "Is it a new form of counterpoint?" I asked. He blinked, confused, before explaining something about reading minds through brain signals and neural activity.
Reading minds, you say? Now this was interesting.
You see, if anyone could benefit from understanding minds, it’s me! I’ve spent decades trying to convey the deepest emotions through sound, hoping those around me would get it, but alas, all they do is applaud or weep, and I’m left guessing which section of the concerto offended them this time. If only I could peer into their brains and know what they truly thought!
I became obsessed. No more fumbling around in the dark, metaphorically or otherwise! If these "neuroscientists" could decode brainwaves like I decode melodies, I needed to know. I spent days—weeks even—perusing every scrap of information on this so-called neural decoding. Well, as much as a deaf, technologically challenged composer could without going insane. And after much frustration, I decided: why not share this revelation with the world? Not through music this time, but through something more... modern.
And so, after many failed attempts to get Fidelio to participate in my own crude neural experiments (turns out cats are quite resistant to scientific inquiry), I knew I had to turn to the experts—the real brain decoders. Which brings me here, now, to you. It was only fitting that I, Ludwig van Beethoven, the master of deciphering sound, should introduce you to the marvel of deciphering minds.
No more guessing, my friends! With neural decoding, the mysteries of the human brain can be unraveled—no ear trumpet required! So, indulge me for a moment, step away from the piano, and watch this fascinating video below. It may not be an Allegro con brio, but I assure you, it’s every bit as mind-blowing.
Now, if only they could create something to decode Fidelio's thoughts... but one step at a time!
youtube
0 notes
akashmaphotography · 2 months ago
Text
Through the Silver Screen: When Sci-Fi Speaks Truth
By Marivel Guzman | Akashma News Introduction: Fiction as Soft Disclosure From sanitized studios to Hollywood’s silver screen, speculative fiction has often served as more than escapism. Some call it predictive programming. Others call it symbolic confession. We call it a mirror held up to a shadowed world—a portal through which we can glimpse deeper truths veiled in metaphor, coded narrative,…
0 notes
astrolocherry · 2 months ago
Text
Bone-Heavy Pure Instinct Signs: Aries - Reckless insight. Divine gut; Leo - psychic territorialism sensing threats to the throne, betrayal before it’s acted on; Cancer - limbic radar, stomach-based ancestral intuition; Scorpio - cellular suspicion, instinct that tastes like obsession.
Companion, Social Intuition/Interception Signs: Gemini - neural claircognisance, lie detection, social telepathy; Libra - environmental, facial micro-readings; Aquarius - interceptor of collective thoughts, "this doesn’t belong here" knowing
Sensory, Intuitive Flow Signs: Taurus - tactile barometer, intuition through physical comfort or discomfort, past life olfactory memories; Pisces - atmospheric psychic reader, lives inside collective psychic weather
Soul-GPS Directional Intuition Signs: Cancer - gut + soul + memory; Sagittarius - truth arrow, movement-based divine knowing, intuitive map; Pisces - ethereal pull to strange foreign places
Chrono-Intuition (“Not yet. Not yet. Now.”) Signs: Capricorn - precision as prophecy; Libra - social timing maestro, doesn’t just feel what’s happening — they feel how it will land; Aquarius - doesn’t “feel” things - they download it from the future; intuition is cold, weird, disruptive
Analytical Intuition “I saw it once and now I know what comes next" Signs: Gemini – mental pattern-matcher, stream-of-consciousness decoder; Virgo – micro-movement forecaster, intuitive deduction via structure; Capricorn – strategic foresight rooted in long-range observation
Cherry
160 notes · View notes
mindblowingscience · 3 months ago
Text
Researchers have developed a new method for intercepting neural signals from the brain of a person with paralysis and translating them into audible speech—all in near real-time. The result is a brain-computer interface (BCI) system similar to an advanced version of Google Translate, but instead of converting one language to another, it deciphers neural data and transforms it into spoken sentences. Recent advancements in machine learning have enabled researchers to train AI voice synthesizers using recordings of the individual’s own voice, making the generated speech more natural and personalized. Patients with paralysis have already used BCIs to improve physical motor control by operating computer mice and prosthetic limbs. This particular system addresses a smaller subset of patients who have also lost their capacity to speak. In testing, the paralyzed patient was able to silently read full text sentences, which were then converted into speech by the AI voice with a delay of less than 80 milliseconds. Results of the study were published this week in the journal Nature Neuroscience by a team of researchers from the University of California, Berkeley and the University of California, San Francisco.
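The core decoding step described here, turning windows of neural activity into words with low latency, can be caricatured as nearest-pattern classification over incoming feature windows. This is a minimal sketch only: the vocabulary, the 4-channel "signatures", and the classifier below are invented for illustration; the actual system uses trained deep networks and a personalized voice synthesizer.

```python
import math

# Toy stand-in for a streaming neural decoder. Everything here is
# hypothetical: real BCIs classify high-dimensional recordings with
# trained models, not hand-written centroids.

VOCAB = ["hello", "world", "stop"]
CENTROIDS = [
    [1.0, 0.0, 0.0, 0.0],  # made-up 4-channel signature for "hello"
    [0.0, 1.0, 0.0, 0.0],  # ... for "world"
    [0.0, 0.0, 1.0, 0.0],  # ... for "stop"
]

def decode_window(features):
    """Classify one window of neural features as the nearest known pattern."""
    dists = [math.dist(c, features) for c in CENTROIDS]
    return VOCAB[dists.index(min(dists))]

def stream_decode(windows):
    """Emit one token per window, mimicking near-real-time streaming output."""
    return [decode_window(w) for w in windows]

print(stream_decode([[0.9, 0.1, 0.0, 0.1], [0.1, 1.1, 0.0, 0.0]]))
# ['hello', 'world']
```

The streaming structure (decode each window as it arrives, rather than waiting for a full sentence) is what keeps the latency low in the real system.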
Continue Reading.
125 notes · View notes
lucettapanchetta · 1 year ago
Text
Tumblr media
[LIVE BROADCAST] - PRIVATE Seven Red Suns, No Significant Harassment, Five Pebbles, Big Sis Moon SOURCE NODE TRACE: NSHROOT, SRSROOT, LTTMROOT LTTM_COMM05, LTTM_COMM04, LTMM_COMM03. FP ROOT
[ Are we ready to establish communication yet? ] [ Yes. All forms of transcription are good to go. Just make sure you manage to send this broadcast to Moon. ] [ ... ] [ I'm not getting a pulse. Not from her transmission arrays of course. ] [ That's fine, just keep port forwarding until you find one that works. ] [ On it. ] [ ... ] [ Still not getting a pulse. There seems to be some sort of low-frequency interference going on. ] [ Should've expected it, Moon's probably overclocking herself right now. ] [ Oh, you think so? ] [ ... ] [ Hold on a second, we're getting a pulse! ] [ From whom? Moon? ] [ No, it's... ] [ Five Pebbles. ] [ Patch him in! I need to see how he's doing. ] [ You do that, I'll keep finding vulnerabilities in Moon's transmission system so we can broadcast this over. ] == BROADCAST MESSAGE IS CORRUPTED OR UNREADABLE == [ Five Pebbles, are you there? ] [ Your messages are delivering, but we aren't able to decode them immediately. ] [... HELP ] [ ...something is is damaging my ... foundation. ...high levels of oxidization located... ] [ ...water is ... low. ...exterior is experiencing ...sort of material imbalance... local tropospheric temperatures... high. ] [ We're going to help you; we just need to your status. ]
[ ...bad. inefficient... moderately damaged. ]
[ Hey, more importantly, could you tell us about Moon's recent activities? ]
[ ...moon? ...radio communications have been down for her since... ] == BROADCAST MESSAGE IS CORRUPTED OR UNREADABLE ==
[ Activities... unknown, unable to contact. ] == BROADCAST MESSAGE IS CORRUPTED OR UNREADABLE == [ Are you able to use your overseers? ] [ ...56% of overseers ... offline. radio transmission... hard to direct...] [ ...green neural electricity ... very strong. disrupts... frequency ys... ] [ please send help, she... won't listen to me... seniority privileges enabled! ] [ We will soon, just hold on! Could you send a wide sweep diagnostics test to No Significant Harassment? ] == BROADCAST MESSAGE IS CORRUPTED OR UNREADABLE == [ ... ] [ We've lost connection. ] [ Yeah, I noticed. Doesn't make it better that the remaining outlets got blocked due to Moon's seniority privileges. ]
[ On the bright side, that is the most amount of conversation I've had with Five Pebbles in a while. ] [ Who knew him being in impending doom would've resulted in me talking to him! ] [ ...you honestly should talk to Five Pebbles more often, he's not that bad once you get to understand him y'know? ] [ I'll sleep on it. ] [ Sigh. ]
[ I hope he can send over that diagnostic report soon, he must be suffering. ]
237 notes · View notes
yayasvalveplay · 1 month ago
Note
IM ON A ROLL BABYYYY I FINALLY GOT MY SHIT FOR MY SHOCKBLURR BABY PROPERLY WRITTEN AYYYY
Anyways—
Tumblr media
Meet Neuralous! One of Shockwave's many children; not only with prestigious intelligence but with a love for Gothic paints and neolights.
Neuralous identifies as non-binary/genderfluid. They GOT ALL THE PRONOUNS BABY AND SHE IS GONNA USE THEM. SO IF ITS A LITTLE WEIRD THAT SOME PARAGRAPHS HAVE DIFFERENT PRONOUNS, ITS INTENTIONAL BECAUSE HE USES ALL OF THEM, but yall can use any pronouns for Neuralous even if it's just one. Let me know if the writing makes like ZERO SENSE 😭
Neuralous, or Void/Noir/Neura as they're nicknamed by family (to that one anon who said the names, I love your brain so im making them my baby's nicknames ❤️), is a rather standoffish, competitive person.
Considering their sire is Shockwave, it shouldn't be surprising.
Growing up, they were taught under Shockwave. Becoming very specialized in coding and neural networking/systems. Since then they have worked under their Sire in the Decepticon Internal Intelligence Division or D.I.I.D as a code breaker and developer ranging from weapons to security.
Void is just as confident in their work as they are with their looks, taking great pride in it: filing/sharpening her claws or doing her optic liner while her other servo is typing away at supercomputers in the lab, getting caught and reprimanded by Shockwave more than once. Some mechs call Neuralous a Shockwave Jr. with just how focused he gets in his work; when he speaks, his speech will speed up on a topic he's genuinely interested in.
Like his carrier, with whom he used to spend lots of time, cuddling as Neuralous babbled about coding games on his datapad. Blurr is the only one who can keep Neuralous grounded and humble so his pride doesn't do him in.
Memories including when a baby Neura took advantage of stretchy limbs, mischievous as a cat— sticking their glossa out to a whimpering siblings who couldn't reach for the toy Neura had in her arms that she wanted all for herself. Only to get a stern talking to by her Carrier; nearly bringing her to tears from the sudden guilt. Her carrier wiping them away before showing her how to share with her siblings, tiredly yet so gently rubbing her 'horns' as Neura used to call them. Carrier smiling softly.
Despite her intelligence and role, she does tend to be a bit of a gossip due to boredom at their desk, which makes other bots tend to undermine her ability to keep conversations subtle. So Neuralous isn't equipped to become an agent like her parents.
However, their expertise has... grown since then. Ever since a mech dubbed Trepan had been accepted into the department, supposedly an acquaintance of the second head of the department, Pharma... things have been happening. A co-worker or two has left, with Trepan scoffing that they couldn't take the pressure as optics aimed at Neuralous.
Trepan had asked them more than once in the past if they would be interested in expanding their horizons and putting their focus on the neural system of Cybertronian anatomy. Trepan had handed them a datapad on neurosurgery with a sharp smile. The datapad felt so heavy in their servo as they took it, stowing away Trepan's obvious flowery words of tutelage in their processor before leaving.
Neuralous would consider this.
Since then... Neuralous has been... rather strange. Other than moving out into a pent house, claiming that they have a second job now, even a passion project... they have been rather... secretive with the location and what not... telling half truths and lies that the family has been having trouble decoding.
Which is saying A LOT, because this is NEURALOUS. She never could help but tell anyone, especially her family, what's up. Yet no matter how hard they pry, Neuralous isn't budging; in fact, the bot pulls away even more when they try to confront her.
Today, Neuralous had just gotten off from her work for her Sire, leaving and promptly bidding farewell at her Sire's office before he could say anything... today she was going to purchase some personal items. Then her blue optic twitched as she got a message. It was from her... colleague she had met lately: they had just finished the formula for their batch.
Until the message was cut off by a call— it was one of their many siblings, from what Neuralous could remember, it was one of their sisters. Nimbus. Neuralous felt their lips curl down as they answered.
"Hello, Nimbus."
They hummed, walking past the shopping center.
"Sorry, Nimbus— the project we're working on needed to be extended, my coding got more complicated."
Neuralous's sharp heeled pedes clicked as they went through an alleyway, their paintjob blending them into the darkness before they emerged into lower stairs. The sound of voices from bots getting closer; Neuralous entered into a busy street, worn down vendors and what not. The street was crowded like a pack of tightly knitted soldiers off to battle.
Neuralous continued until she was at the side as she waited for the crowd to disperse so she could head through. Everyone was tired and worn down, the meek lighting barely glaring above them; they pinched between their red optics, blue optic closing as they sighed.
"Look, Nimbus. I'm sorry that I haven't been visiting, I forgot genuinely... you know I've been working overtime... I don't need to? I know—look—Tell Carrier that I'm doing this with my colleague so it's not like I can just back off."
Neuralous checked one of his claws, as he listened to his sister.
With a soft smile on their painted lips, a rare softness... a vulnerability. Faint guilt in their spark.
"I'll do what I can to make it up to Carrier when I get back, I promise... I swear I'll make it up to you too for dealing with my flashy drama... I'll even let him and you do my make up with whatever you guys want! A spa day if you will... Thanks Nimby, I'll talk later. Take care, Bye. "
Neuralous frowned. It was already hard enough to not talk about what they have learned. But to this extent? To their family? They've even been forgetting to visit to take the worries off of their back. Oh how they wished to have either of their parents' ability to keep long term secrets.
He hung up, excusing himself through the crowd; some of them were clear flight frames or warframes. But Neuralous was especially careful around the small grounder frames; their sharp edges could hit one of them straight in the helm even! And Neuralous got a bit of their height from Sire. "Sorry, sorry! Excuse me, I'll just squeeze right through, yeah–" he hissed. Scrap— it was difficult with small bots around him; he even nearly crashed into a small bot. A blue-opticed bot.
They walked away as they finally took a breath to cool down their heating systems... the things they did to get to the safe house were annoying at most. They rubbed their helm as they unlocked and opened the door. Looking around before closing it, Neuralous turned to the mirror they had installed. They cleaned themselves up before entering deeper into the safe house— past vials and tubs of chemicals and liquids, past computers, past tubes of fertilizer and dirt.
They made sure the extra room that kept their favorite things was locked up tight, adjusting the temperature to keep things moist in there. Where their colleague was.
She went to her lab. It was time for work.
Later on, when it was time to stop, she would unfortunately realize the orange paint had chafed onto her legs. She would look back in her memories, the network pulling information. Ah... she focused too much again.
AJDJFH LET ME KNOW IF THIS MAKES SENSE SOMEHOW ANDJFH FOR THIS GOTHIC DUMBASS
OH OHOHOHOHHO. MION. MION I AM GOING SO FERAL OVER THIS AHHHHHHHH.
BUT ALSO THIS BEING THE THING WHERE THEY BUMP INTO WHEELIE, NOT EVEN REALIZING THAT HE IS STARING AT HER WITH MURDEROUS INTENT.
There is a good chance Wheelie would have followed him, just to see if this was Blurr's place. Or just follow them to get a sense of their routine.
One of these days they'll run him right into Shockwave, and Shockwave will show him where his Carrier is.
Though he has to juggle this with his gladiator duties, and seeing the two hot and mysterious mech and fem.
but I guess it's a parallel, since they are both hiding stuff from the people that they love.
18 notes · View notes
apas-95 · 2 years ago
Note
I kinda wanna shake these people like do u know the good machine learning has done for science?? diagnosing disease with greater accuracy than a doctor?? like, we decoded DNA that would've taken us forever otherwise, it can be an amazing tool, the problem is the devaluing of people and their labor (capitalism)!! my partner is working on a tool for accessible recipes that you can tell it whatever you have on hand, if you need a substitution, religious restrictions, etc. and it'll suggest things to you.
uhhhhm actually sweaty neural networks are only used specifically to make pictures online through Theft.
no but seriously if you tried to talk to these people about like, machine learning tools helping with supercomputer protein folding or in locating tumors in MRI data they'd think you meant 'asking ChatGPT to diagnose you' because they only know the most absolutely front-facing aspects of the technology that got conveyed to them by the exact same popsci 'techbros' hyping it up that they're now shadowboxing against
130 notes · View notes
ecos-syscourse · 26 days ago
Text
I think that people are massively misunderstanding how "AI" works.
To summarize, AI like chatGPT uses two things to determine a response: temperature and likeableness. (We explain these at the end.)
ChatGPT is made with the purpose of conversation, not accuracy (in most cases).
It is trained to communicate. It can do other things as well, like math. Basically, it has a calculator function.
It also has a translate function. Unlike what people may think, google translate and chatGPT both use AI. The difference is that chatGPT is generative. Google Translate uses "neural machine translation".
Here is the difference between a generative LLM and a NMT translating, as copy-pasted from Wikipedia, in small text:
Instead of using an NMT system that is trained on parallel text, one can also prompt a generative LLM to translate a text. These models differ from an encoder-decoder NMT system in a number of ways:
Generative language models are not trained on the translation task, let alone on a parallel dataset. Instead, they are trained on a language modeling objective, such as predicting the next word in a sequence drawn from a large dataset of text. This dataset can contain documents in many languages, but is in practice dominated by English text. After this pre-training, they are fine-tuned on another task, usually to follow instructions.
Since they are not trained on translation, they also do not feature an encoder-decoder architecture. Instead, they just consist of a transformer's decoder.
In order to be competitive on the machine translation task, LLMs need to be much larger than other NMT systems. E.g., GPT-3 has 175 billion parameters, while mBART has 680 million and the original transformer-big has “only” 213 million. This means that they are computationally more expensive to train and use.
A generative LLM can be prompted in a zero-shot fashion by just asking it to translate a text into another language without giving any further examples in the prompt. Or one can include one or several example translations in the prompt before asking to translate the text in question. This is then called one-shot or few-shot learning, respectively.
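The zero-shot vs. few-shot distinction in the quoted passage is just a difference in how the prompt is assembled before it is sent to the model. A minimal sketch (the `=>` formatting convention is an arbitrary choice here, and the actual model call is omitted):

```python
def zero_shot_prompt(text, target_lang):
    # Zero-shot: no examples, just an instruction to translate.
    return f"Translate the following text into {target_lang}:\n{text}"

def few_shot_prompt(examples, text):
    # Few-shot: show (source, translation) pairs first, then the real query;
    # the model is expected to continue the pattern.
    shots = "\n".join(f"{src} => {tgt}" for src, tgt in examples)
    return f"{shots}\n{text} =>"

print(zero_shot_prompt("Der Hund bellt.", "English"))
print(few_shot_prompt([("Katze", "cat"), ("Hund", "dog")], "Vogel"))
```

Either string would then be sent to the LLM as-is; nothing about the model itself changes between the two modes.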
Anyway, they both use AI.
But as mentioned above, generative AI like chatGPT are made with the intent of responding well to the user. Who cares if it's accurate information as long as the user is happy? The only thing chatGPT is worried about is if the sentence structure is accurate.
ChatGPT can source answers to questions from its available data.
... But most of that data is English.
If you're asking a question about what something is like in Japan, you're asking a machine whose primary goal is to make its user happy what the mostly American (but sure, some other English-speaking countries) internet thinks something is like in Japan. (This is why there are errors where AI starts getting extremely racist, ableist, transphobic, homophobic, etc.)
Every time you ask chatGPT a question, you are asking not "Do pandas eat waffles?" but "Do you think (probably an) American would think that pandas eat waffles? (respond as if you were a very robotic American)"
In this article, OpenAI says "We use broad and diverse data to build the best AI for everyone."
In this article, they say "51.3% pages are hosted in the United States. The countries with the estimated 2nd, 3rd, 4th largest English speaking populations—India, Pakistan, Nigeria, and The Philippines—have only 3.4%, 0.06%, 0.03%, 0.1% the URLs of the United States, despite having many tens of millions of English speakers." ...and that training data makes up 60% of chatGPT's data.
Something called "WebText2", aka text from every web page linked on Reddit with more than 3 upvotes, was also scraped for ChatGPT. On a totally unrelated note, I really wonder why AI is so racist, ableist, homophobic, and transphobic.
According to the article, this data is the most heavily weighted for ChatGPT.
"Books1" and "Books2" are stolen books scraped for AI. Apparently, there is practically nothing written down about what they are. I wonder why. It's almost as if they're avoiding the law.
It's also specifically trained on English Wikipedia.
So broad and diverse.
"ChatGPT doesn’t know much about Norwegian culture. Or rather, whatever it knows about Norwegian culture is presumably mostly learned from English language sources. It translates that into Norwegian on the fly."
hm.
Anyway, about the temperature and likeableness that we mentioned in the beginning!! if you already know this feel free to skip lolz
Temperature:
"Temperature" is basically how likely, or how unlikely, something is to be said. If the temperature is low, the AI will say whatever the most expected word to come next after ___ is, as long as it makes sense.
If the temperature is high, it might say something unexpected.
For example, if an AI with a temperature of 1 and one with a temperature of, maybe 7 idk, were told to add to the sentence that starts with "The lazy fox..." they might answer with this.
1:
The lazy fox jumps over the...
7:
The lazy fox spontaneously danced.
The AI with a temperature of 1 would give what it expects, in its data "fox" and "jumps" are close together / related (because of the common sentence "The quick fox jumps over the lazy dog."), and "jumps" and "over" are close as well.
The AI with a temperature of 7 gives something much more random. "Fox" and "spontaneously" are probably very far apart. "Spontaneously" and "danced"? Probably closer.
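Concretely, temperature divides the model's raw next-token scores before they are turned into probabilities. A sketch with made-up scores for the word after "The lazy fox" (real samplers then draw one token from this distribution; the numbers here are invented for illustration):

```python
import math

def temperature_probs(logits, temperature):
    """Softmax over scores scaled by 1/temperature.
    Low temperature sharpens the distribution toward the expected token;
    high temperature flattens it so unexpected tokens become plausible."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores: "jumps" is the expected continuation,
# "spontaneously" the surprising one.
logits = [3.0, 1.0, 0.2]  # jumps, sleeps, spontaneously

print(temperature_probs(logits, 0.5))  # 'jumps' gets almost all of the mass
print(temperature_probs(logits, 5.0))  # much flatter: surprises become likely
```

Same scores, same model; only the sampling distribution changes with the temperature setting.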
Likeableness:
AI wants all prompts to be likeable. This works in two ways, it must 1. be correct and 2. fit the guidelines the AI follows.
For example, an AI that tried to say "The bloody sword stabbed a frail child." would get flagged for being violent. (bloody, stabbed)
An AI that tried to say "Flower butterfly petal bakery." would get flagged for being incorrect.
An AI that said "blood sword knife attack murder violence." would get flagged for both.
An AI's sentence gets approved when it is likeable + positive, and when it is grammatical/makes sense.
Sometimes, it being likeable doesn't matter as much. Instead of it being the AI's job, it usually will filter out messages that are inappropriate.
Unless they put "gay" and "evil" as inappropriate, AI can still be extremely homophobic. I'm pretty sure what gets judged for likeableness is usually the individual words, and not the meaning of the sentence.
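That word-level failure mode can be illustrated with a toy blocklist filter (real moderation systems are learned classifiers, and this word list is invented; the point is only that checking individual tokens never looks at what the sentence means):

```python
# Hypothetical blocklist, echoing the post's examples above.
BLOCKLIST = {"bloody", "stabbed", "blood", "knife", "attack", "murder", "violence"}

def flagged(sentence):
    # Checks each token in isolation; the sentence's meaning is never examined.
    words = (w.strip(".,!?").lower() for w in sentence.split())
    return any(w in BLOCKLIST for w in words)

print(flagged("The bloody sword stabbed a frail child."))  # True
print(flagged("The horse had a beautiful mane."))          # False
```

A sentence with a hateful meaning but no listed words passes such a filter, which is exactly the loophole described above.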
When AI is trained, it is given a bunch of data and then given prompts to fill, which are marked good or bad.
"The horse shit was stinky."
"The horse had a beautiful mane."
...
...
...
Notice how none of this is "accuracy"? The only knowledge that AI like ChatGPT retains from scraping everything is how we speak, not what we know. You could ask AI who the 51st President of America "was" and it might say George Washington.
Google AI scrapes the web results given for what you searched and summarizes it, which is almost always inaccurate.
Tumblr media
soooo accurate. (it's not) (it's in 333 days, 14 hours)
10 notes · View notes
ironsharkkryptonite · 8 months ago
Text
Robotic Thought or affirmation decoded!
For an SP (specific person), you might start with “He is madly in love with me” or “I am irresistible.” At first, it may feel empty or forced, but as you continue repeating it, the words will sink into your subconscious. Soon, your mind will adopt this as a core belief, and your reality will start aligning with it without you needing to push as hard.
Benefits:
• Simplifies Affirmation Process: You don’t need to overthink or force strong emotion from the start.
• Bypasses Resistance: Robotic affirmations can reduce resistance because you’re not pushing against disbelief; you’re simply repeating and letting the subconscious do the work.
• Creates New Neural Pathways: Each repetition strengthens the mental pattern that aligns with the affirmation, eventually replacing old limiting beliefs.
Tumblr media
19 notes · View notes
compneuropapers · 11 months ago
Text
Interesting Papers for Week 29, 2024
The time course of feature-selective attention inside and outside the focus of spatial attention. Andersen, S. K., & Hillyard, S. A. (2024). Proceedings of the National Academy of Sciences, 121(16), e2309975121.
The role of uncertainty in regulating associative change. Chan, Y. Y., Lee, J. C., Fam, J. P., Westbrook, R. F., & Holmes, N. M. (2024). Journal of Experimental Psychology: Animal Learning and Cognition, 50(2), 77–98.
Neural correlates of perceptual similarity masking in primate V1. Chen, S. C.-Y., Chen, Y., Geisler, W. S., & Seidemann, E. (2024). eLife, 12, e89570.3.
Timing along the cardiac cycle modulates neural signals of reward-based learning. Fouragnan, E. F., Hosking, B., Cheung, Y., Prakash, B., Rushworth, M., & Sel, A. (2024). Nature Communications, 15, 2976.
Multicore fiber optic imaging reveals that astrocyte calcium activity in the mouse cerebral cortex is modulated by internal motivational state. Gau, Y.-T. A., Hsu, E. T., Cha, R. J., Pak, R. W., Looger, L. L., Kang, J. U., & Bergles, D. E. (2024). Nature Communications, 15, 3039.
A cognitive-computational account of mood swings in adolescence. Gregorová, K., Eldar, E., Deserno, L., & Reiter, A. M. F. (2024). Trends in Cognitive Sciences, 28(4), 290–303.
Probabilistic causal reasoning under time pressure. Kolvoort, I. R., Fisher, E. L., van Rooij, R., Schulz, K., & van Maanen, L. (2024). PLOS ONE, 19(4), e0297011.
EEG decoders track memory dynamics. Li, Y., Pazdera, J. K., & Kahana, M. J. (2024). Nature Communications, 15, 2981.
Dynamic saccade context triggers more stable object-location binding. Lu, Z., & Golomb, J. D. (2024). Journal of Experimental Psychology: General, 153(4), 873–888.
It is not all about you: Communicative cooperation is determined by your partner’s theory of mind abilities as well as your own. Markiewicz, R., Rahman, F., Apperly, I., Mazaheri, A., & Segaert, K. (2024). Journal of Experimental Psychology: Learning, Memory, and Cognition, 50(5), 833–844.
Dopamine control of social novelty preference is constrained by an interpeduncular-tegmentum circuit. Molas, S., Freels, T. G., Zhao-Shea, R., Lee, T., Gimenez-Gomez, P., Barbini, M., … Tapper, A. R. (2024). Nature Communications, 15, 2891.
Space wandering in the rodent default mode network. Nghiem, T.-A. E., Lee, B., Chao, T.-H. H., Branigan, N. K., Mistry, P. K., Shih, Y.-Y. I., & Menon, V. (2024). Proceedings of the National Academy of Sciences, 121(15), e2315167121.
Attention-based rehearsal: Eye movements reveal how visuospatial information is maintained in working memory. Sahan, M. I., Siugzdaite, R., Mathôt, S., & Fias, W. (2024). Journal of Experimental Psychology: Learning, Memory, and Cognition, 50(5), 687–698.
Manipulating Prior Beliefs Causally Induces Under- and Overconfidence. Van Marcke, H., Denmat, P. Le, Verguts, T., & Desender, K. (2024). Psychological Science, 35(4), 358–375.
Top–down modulation in canonical cortical circuits with short-term plasticity. Waitzmann, F., Wu, Y. K., & Gjorgjieva, J. (2024). Proceedings of the National Academy of Sciences, 121(16), e2311040121.
Phasic locus coeruleus activity enhances trace fear conditioning by increasing dopamine release in the hippocampus. Wilmot, J. H., Diniz, C. R., Crestani, A. P., Puhger, K. R., Roshgadol, J., Tian, L., & Wiltgen, B. J. (2024). eLife, 12, e91465.3.
Eye blinks as a visual processing stage. Yang, B., Intoy, J., & Rucci, M. (2024). Proceedings of the National Academy of Sciences, 121(15), e2310291121.
Distinct information conveyed to the olfactory bulb by feedforward input from the nose and feedback from the cortex. Zak, J. D., Reddy, G., Konanur, V., & Murthy, V. N. (2024). Nature Communications, 15, 3268.
Conjunctive encoding of exploratory intentions and spatial information in the hippocampus. Zeng, Y.-F., Yang, K.-X., Cui, Y., Zhu, X.-N., Li, R., Zhang, H., … Zhou, N. (2024). Nature Communications, 15, 3221.
Environmental regularities mitigate attentional misguidance in contextual cueing of visual search. Zinchenko, A., Conci, M., Müller, H. J., & Geyer, T. (2024). Journal of Experimental Psychology: Learning, Memory, and Cognition, 50(5), 699–711.
25 notes · View notes
llazyneiph · 2 years ago
Text
Cybernetix 2.4 (Nanobots Expanded)
🌟 Exciting Update Alert! Introducing New Cybernetix Enhancements! 🌟
Hey guys! I'm thrilled to unveil the latest update for Cybernetix, bringing a new wave of futuristic possibilities to your Sims' lives.
🕒 Chrono-Enhancer - Eternal Youth Awaits! Tired of the passage of time? With the new Chrono-Enhancer, your Sims can put a halt to aging and enjoy the benefits of eternal youth. Watch as your young adult Sims stand proudly beside their elder counterparts. Embrace ageless beauty and endless adventures, all while time stands still.
🌐 Quantum Jump Module - Transcend Space! Imagine teleporting from one end of the world to another in an instant. Thanks to the Quantum Jump Module, your Sims can now teleport instead of walking!
☀️ Weather Manipulation Module - Control Nature's Palette! (Requires Seasons) Yearn for sunny days or long for snow-covered landscapes? The Weather Manipulation Module grants your Sims the ability to change the weather at will. From radiant sunlight to gentle rain and even enchanting snowfall, your Sims will wield the power to control the weather!
🌿 Eco-Synthesis Processor - Waste to Wonderland! Turn waste into wonder with the Eco-Synthesis Processor. Your Sim can now cultivate a lush oasis of vibrant flora using their own waste. The cycle of life takes on a whole new meaning!
⚡️ Bio-Energy Harvester - Harvest the Power of Technology! Discover a new way to power up! The Bio-Energy Harvester allows your Sims to draw energy from any technology around them. Zap your electronics dry as you reimagine the concept of energy consumption.

🌬️ Bio-Cleansing System - Breathe Fresh Air Anywhere! In a world where pollution is a concern, the Bio-Cleansing System provides a breath of fresh air. Your Sim can now cleanse the air they intake, ensuring they breathe clean and refreshing air even in the most polluted zones. A clean atmosphere for a clean life!
🔓 Neural Unlocking - Decode, Override, Conquer Computers! Locked out of a computer system? Not anymore! With the Neural Unlocking enhancement, your Sims can unlock any computer with ease!
Stay tuned for more updates and enhancements in the future! ✨
DOWNLOAD
122 notes · View notes
Text
🤔🤯 Can Gallifreyans taste memories?
Ever wondered if Gallifreyans can taste and interpret human knowledge/memories from objects they lick? Well no, you probably haven't, because that sounds ridiculous even for a Gallifreyan, right?
🧬 The Science of Sensory Perception
Gallifreyans are known for their sophisticated physiology, including highly advanced taste receptors capable of detecting complex molecular structures. Among these structures is human RNA, a molecule pivotal in the coding, decoding, regulation, and expression of human genes.
When Gallifreyans come into contact with human RNA, they can translate these sequences into understandable data. Essentially, ingested RNA can be processed into a format that their brain can interpret as memories—just like converting a file to a different format for compatibility with another program.
🌀 How!?
Enzymatic Action: Special enzymes in Gallifreyan saliva or their stomach might be responsible for converting human RNA into a decipherable format.
Neuro-Gastric Link: There could be a direct neural connection between their gastrointestinal tract and their nervous system.
It's a little-known ability that may have to be learned, but hey, now you know. And if you know, they might know too if they lick something you own.
Gallifreyan Biology for Tuesday by GIL
Any orange text is educated guesswork or theoretical.

More content ...
→📫 Got a question? | 📚 Complete list of Q+A and factoids
→📢 Announcements | 🩻 Biology | 🗨️ Language | 🕰️ Throwbacks | 🤓 Facts
→ Features: ⭐ Guest Posts | 🍜 Chomp Chomp with Myishu
→🫀 Gallifreyan Anatomy and Physiology Guide (pending)
→⚕️ Gallifreyan Emergency Medicine Guides
→📝 Source list (WIP)
→📜 Masterpost

If you're finding your happy place in this part of the internet, feel free to buy a coffee to help keep our exhausted human conscious. She works full-time in medicine and is so very tired 😴
20 notes · View notes
mindblowingscience · 2 years ago
Text
We're seeing major advancements in tech that can decode brain signals, interpreting neural activity to reveal what's on someone's mind, what they want to say, or – in the case of a new study – which song they're listening to.

US researchers have been able to reconstruct a "recognizable version" of a Pink Floyd song based on the pulses of activity moving through a specific part of the brain's temporal lobe in volunteers as they listened to the hit Another Brick in the Wall Part 1.

While the tune in question did go through some initial processing into a spectrogram form to be more compatible with the brain's audio processing techniques, the reverse process is impressive in terms of its fidelity.

"We reconstructed the classic Pink Floyd song Another Brick in the Wall from direct human cortical recordings, providing insights into the neural bases of music perception and into future brain decoding applications," says neuroscientist Ludovic Bellier from the University of California, Berkeley.
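For the curious: this kind of decoding is often framed as a regularized linear regression from electrode activity to spectrogram frames. The sketch below is a toy illustration on simulated data, not the study's actual pipeline; the dimensions, ridge penalty, and the hidden linear mapping are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 1000 time bins, 64 electrodes, 32 spectrogram frequency bins
T, n_elec, n_freq = 1000, 64, 32

# Simulate neural activity and a hidden linear neural->spectrogram mapping
neural = rng.normal(size=(T, n_elec))
true_map = rng.normal(size=(n_elec, n_freq))
spec = neural @ true_map + 0.1 * rng.normal(size=(T, n_freq))

# Fit a ridge-regression decoder on the first 800 time bins
lam = 1.0
X, Y = neural[:800], spec[:800]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_elec), X.T @ Y)

# Reconstruct held-out spectrogram frames and check fidelity
pred = neural[800:] @ W
r = np.corrcoef(pred.ravel(), spec[800:].ravel())[0, 1]
print(f"held-out correlation: {r:.3f}")
```

Turning the predicted spectrogram back into audible sound is a separate (lossy) inversion step, which is why the reconstructions sound recognizable rather than pristine.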
Continue Reading.
148 notes · View notes
scarlineorbit · 22 days ago
Text
🔐 PANDORA – ENVY DIVISION DOSSIER
SUBJECT: KHALILSON, SIRIUS
Call Sign: Graft
Clearance Level: ██/███ (Tier-9 Envy Access Only)
Dossier ID: PND-ENVY/Δ-HXN14-633
BIOLOGICAL OVERVIEW
Field Entry:
Full Name: Sirius Khalilson
Date of Birth: 12 May 1989
Height: 5’7”
Weight: 154 lbs
Place of Birth: Alexandria, Egypt
Nationality: Dual – Egyptian / Classified (Naturalized ██████)
Species: Human (Genetically Modified)
Affiliation: Pandora Initiative – Envy Division
Former Affiliation: REDACTED CIA – Covert Operations Division
Status: Active / Monitored
INTERNAL EVALUATION REPORT
SECTION I: GRAFT ABILITY OVERVIEW
Classification: Tier-IV Neurokinesis
Baseline Genome: Homo sapiens (genetically modified)
A. Neural Manipulation Capabilities
"Neurocartography"—the ability to perceive, manipulate, and extract the neurological "maps" of sensation, memory, and pain encoded within the nervous system, particularly along the spinal column. The “pages” appear in his perception like glowing neural tattoos hovering in space. The longer a "page" is held outside a host, the more unstable it becomes—like a fraying nerve, it degrades and corrupts. Victims may convulse or experience involuntary muscle spasms during extraction.
To Graft, a person's spine is not just a bundle of nerves—it's a living archive. Every trauma, every learned reflex, every scar of experience is encoded in the body's electrical language. He can reach into that system and tear out pieces of it like pages from a book.
Neuro-Spinal Grafting: Subject can extract and imprint pain-response pathways from others via physical contact, particularly along the spinal column.
Pain Transference: Subject can reroute nociceptive signals—emotional or physical pain—from one being into another, or internalize them.
Synaptic Mapping: Capable of creating temporary neuro-empathic links, allowing for the perception and extraction of deeply embedded emotional trauma.
Pain Extraction: Graft can isolate and rip the neural mapping of specific pain memories or trauma, leaving the target numb, confused, or emotionally hollow. Victims often feel a phantom absence, as if something fundamental is missing from their nervous system.
Sensory Hijack: By dragging or splicing these maps, he can transplant pain pathways from one person into another—making one person suffer another’s wounds or guilt.
Neural Silencing: He can temporarily (or permanently) sever connections between the brain and the body—cutting off pain, emotion, or even motor control with precision. This is often used mid-combat to disable or interrogate.
Echo Mapping: When he extracts a pain-map, Graft gains brief psychic "echoes" of the target’s trauma—flashes of memory, fear, or sensation. These moments are invasive, sometimes overwhelming, and often addictive.
"Spinal Bookmarking": He can mark a specific nerve thread and return to it later—revisiting a memory or reactivating pain as a threat or reminder.
B. Structural Integrity & Recovery
Accelerated Tissue Regeneration: Healing factor calibrated to 5.1× human baseline; minor wounds self-seal within minutes.
Adaptive Neurological Compensation: Nervous system dynamically adapts to new grafted data, granting temporary access to pain tolerances, reflexes, or biological instincts from others.
Neuroempathic Binding: When emotionally tethered to another subject, can redirect trauma responses to stabilize or destabilize neural equilibrium.
SECTION II: SENSORIAL AND PHYSICAL ENHANCEMENTS
A. Sensory Expansion
Olfactory Resolution: 45× human range. Utilized for emotional trace detection and trauma scent decoding.
Auditory Range: Detects ultrasonic emissions up to ~64 kHz.
Empathically responsive to distress tones.
Emotive Field Perception: Subconscious detection of fear, grief, and intent via biochemical cues.
B. Physical Conditioning
Subject exhibits enhanced muscular strength and endurance, with peak output estimated at nearly three times that of an average human.
Strength levels dynamically fluctuate based on the current neurokinetic load and graft-induced physiological adaptations.
Reflexes and agility are markedly improved, allowing for sudden bursts of rapid, unpredictable movement in combat or escape scenarios.
Subject demonstrates limited organic mimicry, temporarily altering dermal texture and muscle tone to blend with environmental or emotional cues, aiding stealth and resilience.
SECTION III: COGNITIVE PROFILE & BEHAVIORAL PERFORMANCE
A. Tactical Intelligence
Capable of real-time strategic recalibration under high duress.
Uses empathic-neural input to anticipate opponent behavior.
Avoids direct command hierarchies; prefers independent execution or small-unit autonomy.
B. Emotional Regulation
Subject exhibits alexithymia.
Emotional recognition in others is strong; emotional articulation for self is deficient.
Displays guarded affect but is highly reactive to distress signals.
SECTION IV: TECHNICAL EQUIPMENT INTERFACE
Primary Weapon: Custom-modified compact assault rifle (caliber classified)
Secondary Weapon: Silenced semi-automatic pistol
Melee Weapon: Surgical-grade scalpel set (various sizes, titanium alloy blades)
Additional Tools: Covert tactical knife (ceramic blade), field med-kit for emergency wound treatment
SECTION V: OPERATIONAL SPECIALIZATIONS
Neural Mapping & Emotional Residue Analysis
Memory Bleed Resistance (compartmentalized trauma locks)
Pain Redistribution (neurokinetic absorption and redirection)
Hostage Emotional Stabilization & Triage
Infiltration via Empathic and Neural Linkage
Environmental Tracking through Neural Echoes
SECTION VI: LIMITATIONS AND RISK FACTORS
Identity Disruption: Recurrent neuro-spinal grafting causes dissociative episodes and emotional boundary collapse.
Neural Feedback Syndrome: May suffer from seizures or paralysis due to overloaded neurokinetic transfer.
Emotional Saturation: High-emotion zones can short-circuit empathic buffers, leading to neurological shutdown.
Delayed Communication: In high-intensity neurokinetic states, subject struggles with coherent speech or response.
Biological Overload: Absorbing multiple trauma signatures simultaneously risks permanent nerve damage.
Exploitation Risk: Vulnerable to psychic or AI-driven psychological subversion through emotional resonance traps.
Solitude Dependency: Requires recovery periods of isolation to recalibrate neural thresholds and emotional alignment.
Tactical Rigidity: Functions best in small-cell environments; exhibits resistance to hierarchical structure.
Sensory Overload: Extreme auditory/visual stimuli can disorient or incapacitate subject temporarily.
Evaluation Summary:
Graft demonstrates extraordinary capabilities rooted in neurokinesis—particularly neural pain transference and emotional extraction. However, the invasive and intimate nature of his ability imposes dangerous consequences on his cognition, physiology, and operational coherence.
LIMITATIONS
Cognitive Displacement: Subject risks permanent identity fragmentation through repeated grafting cycles.
Partial amnesia and persona contamination have been observed.
Neuropathic Burnout: Subject has exhibited blackouts and cardiac distress following prolonged neural absorption.
Residual Trauma Retention: Psychological echoes from previous grafts remain embedded. These fragments interfere with present emotional processing.
Pathological Transmission: Subject can absorb and unwittingly internalize mental disorders or psychosomatic triggers.
Autonomy Override: Attachment to emotionally charged subjects often overrides extraction protocol.
Technological Ineptitude: Lacks intuitive engagement with electronic systems under pressure. Delegates all tech-interaction in field.
Summary Judgment: Graft is indispensable in high-emotion, trauma-sensitive environments but poses a systemic threat to mission cohesion if overexposed. Recommends emotional containment handler assignment on all future ops.
SECTION VII: INCIDENT LOG EXCERPTS
Incident #041 – Neural Feedback Loop
Date: 22 May 2023
Outcome: Feedback from linked operative caused neurokinetic seizure. Subject unconscious for 17 hours post extraction.

Incident #087 – Hostage Stabilization (Cairo Market Siege)
Date: 18 Oct 2024
Outcome: Siphoned panic signals and trauma loops from five civilian children. Subject suffered severe nosebleed and memory distortion.

Incident #104 – Deep-Graft Fugue
Date: 19 Feb 2025
Outcome: Located 12 km off-grid in fugue state. Subject displayed partial memory from linked civilian with no debrief record.
PSYCHOLOGICAL PROFILE [CONFIDENTIAL – REDACTED]
Diagnosed with Controlled Empathic Dissociation Disorder
Compulsion: Logs emotional trauma in encrypted neural diary
Attachment Schema: High affective enmeshment; prefers solitude
Trust Rating: Moderate – highly situational
Loyalty Tier: ██ (Anchor Dependent)
Obedience Level: Autonomous
Watchlist Tag: "If overwhelmed, do not touch. Withdraw."
Behavioral Note (per Division Psych Lead): “Graft exhibits a paradox of hyper-attunement and emotional detachment. While his neurokinetic abilities grant him intimate access to others’ trauma, he remains emotionally non-integrated with his own experiences. He responds to suffering with precision, not empathy—intervening like a surgeon, not a savior. Isolation is his default coping strategy, yet he’s magnetically drawn to pain in others. Recommends close monitoring for emerging compulsions tied to unresolved emotional grafts.”
CLASSIFIED — AUTHORIZED MEDICAL PERSONNEL ONLY
Subject: Medical and Enhancement History Report
Operative Call Sign: Graft
Date: May 20, 2025
Prepared by: Division Medical and Biotechnical Services
MEDICAL AND ENHANCEMENT HISTORY REPORT
Initial Integration: March 7, 2019 – Cairo Site-09
Procedure: Tier-IV Neurokinetic Grafting Implant
Outcome: 93% synaptic retention rate. Subject stabilized after 72-hour sensory fragmentation.
Phase II Enhancements (August 2020 – Berlin Cell):
Neural relay mesh implanted for signal redirection
Pain conduit routing adjusted for enhanced durability
Neurofeedback buffer to reduce overload risk
Complications:
Graft Episode #12 (March 2021): Involuntary bond with rioter led to loss of self-identity for 36 hours. Memories distorted.
Phase III Enhancements (October 2023 – Site-12):
Pain transference refinement module installed
Reflex acceleration node integrated into basal ganglia
Known Medical Flags:
Mild arrhythmia post-graft
Spiking neurochemical levels in zones of extreme grief
Elevated oxytocin retention linked to prolonged emotional tethering
Monitoring Notes:
Subject's neurokinetic field reacts unpredictably to certain biotypes. Continues to refuse emotional suppressants.
Ongoing Precautions:
Mandate psych recovery window post-op (3–5 hours)
Enforce handler presence near trauma-site survivors
Revoke solo deployment in extreme emotional zones exceeding 48 hours
SPECIALIZATIONS
Neurokinetic Pain Transference
Tactical Infiltration
Emotional Field Mapping
Nerve Network Hijacking
Environmental Adaptation
Emotional Signal Disruption
Empathic Covert Reconnaissance
Close Quarters Threat Neutralization
Behavioral Surveillance
LIMITATIONS
Identity Dissolution
Neural Fatigue
Communication Lag
Trauma Residue Retention
Manipulation Vulnerability
SKILLS
Proficient:
Stealth & Infiltration
Close Combat Mastery
Insight
Intimidation
Pain Tolerance
Expertise:
Tactical Foresight
Substandard:
Athletics
Vehicular Operation
KNOWN ANOMALIES
Subject retains sensory and memory data from past contact
Neural echo recall includes sight, sound, and emotional profile
Encrypted trauma logs updated subconsciously
🗂️ MISSION LOGS – CLASSIFIED ENVY OPS
🔻 END OF FILE
"He carries the dead inside him—so no one else has to." – Pandora Internal Memo
4 notes · View notes
beardedmrbean · 3 months ago
Text
Scientists have developed a device that can translate thoughts about speech into spoken words in real time.
Although it’s still experimental, they hope the brain-computer interface could someday help give voice to those unable to speak.
A new study described testing the device on a 47-year-old woman with quadriplegia who couldn’t speak for 18 years after a stroke. Doctors implanted it in her brain during surgery as part of a clinical trial.
It “converts her intent to speak into fluent sentences,” said Gopala Anumanchipalli, a co-author of the study published Monday in the journal Nature Neuroscience.
Other brain-computer interfaces, or BCIs, for speech typically have a slight delay between thoughts of sentences and computerized verbalization. Such delays can disrupt the natural flow of conversation, potentially leading to miscommunication and frustration, researchers said.
This is “a pretty big advance in our field,” said Jonathan Brumberg of the Speech and Applied Neuroscience Lab at the University of Kansas, who was not part of the study.
A team in California recorded the woman's brain activity using electrodes while she spoke sentences silently in her head. Using recordings of her voice from before her injury, the scientists built a synthesizer to recreate the speech sounds she would have spoken. They trained an AI model that translates neural activity into units of sound.
It works similar to existing systems used to transcribe meetings or phone calls in real time, said Anumanchipalli, of the University of California, Berkeley.
The implant itself sits on the speech center of the brain so that it’s listening in, and those signals are translated to pieces of speech that make up sentences. It’s a “streaming approach,” Anumanchipalli said, with each 80-millisecond chunk of speech — about half a syllable — sent into a recorder.
“It’s not waiting for a sentence to finish,” Anumanchipalli said. “It’s processing it on the fly.”
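That streaming approach can be sketched as a generator that emits one decoded unit per 80-millisecond window instead of buffering a whole sentence. This is only a hedged toy sketch: the sampling rate and the `decode_chunk` stand-in below are assumptions for illustration, not the study's trained model.

```python
import numpy as np

SAMPLE_RATE = 1000                       # assumed neural sampling rate, in Hz
CHUNK_MS = 80                            # per the article: ~half a syllable per chunk
CHUNK = SAMPLE_RATE * CHUNK_MS // 1000   # samples per 80 ms window

def decode_chunk(chunk):
    """Stand-in for the trained model: the real system maps each window
    of neural activity to speech units; here we just summarize it."""
    return float(chunk.mean())

def streaming_decode(neural_stream):
    """Yield one decoded unit per chunk instead of waiting for a sentence."""
    buf = []
    for sample in neural_stream:
        buf.append(sample)
        if len(buf) == CHUNK:
            yield decode_chunk(np.asarray(buf))
            buf = []

# Simulate one second of single-channel neural data
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, SAMPLE_RATE))
units = list(streaming_decode(signal))
print(len(units))  # 12 full 80 ms chunks fit in 1000 ms
```

The design point is latency: each window is decoded as soon as it fills, so output begins within one chunk of the user starting to "speak" rather than after a sentence boundary.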
Decoding speech that quickly has the potential to keep up with the fast pace of natural speech, said Brumberg. The use of voice samples, he added, “would be a significant advance in the naturalness of speech.”
Though the work was partially funded by the National Institutes of Health, Anumanchipalli said it wasn’t affected by recent NIH research cuts. More research is needed before the technology is ready for wide use, but with “sustained investments,” it could be available to patients within a decade, he said.
6 notes · View notes